Riemannian optimization is a principled framework for solving optimization problems in which the desired optimum is constrained to a smooth manifold $\mathcal{M}$. Algorithms designed in this framework usually require a geometric description of the manifold, which typically includes tangent spaces, retractions, and gradients of the cost function. However, in many instances, only a subset of these elements is accessible (or none at all), due to lack of information or intractability. In this paper, we propose a novel approach for performing approximate Riemannian optimization in such cases, where the constraining manifold is a submanifold of $\mathbb{R}^{d}$. At the bare minimum, our method requires only a noiseless sample set of the cost function $(x_i, y_i) \in \mathcal{M} \times \mathbb{R}$ and the intrinsic dimension of the manifold $\mathcal{M}$. Using the samples, and leveraging the Manifold-MLS framework (Sober and Levin 2020), we construct approximations of the missing components that enjoy provable guarantees, and analyze their computational cost. If some of the components are given analytically (e.g., if the cost function and its gradient are given explicitly, or if the tangent spaces can be computed), the algorithm can easily be adapted to use the exact expressions instead of the approximations. We analyze the global convergence of Riemannian gradient-based methods using our approach, and empirically demonstrate the strength of this method, together with a conjugate-gradient-type method based upon similar principles.
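To make the sample-based setting concrete, the following is a rough Python sketch, under our own simplifying assumptions, of approximate Riemannian gradient descent on a submanifold of $\mathbb{R}^d$ known only through samples: tangent spaces are estimated by local PCA over nearest samples, and the retraction is replaced by a crude nearest-sample projection. For simplicity the sketch assumes the Euclidean gradient of the cost is available (one of the "given analytically" cases mentioned above); it is not the paper's Manifold-MLS construction, and the helper names (`estimate_tangent`, `approx_riemannian_gd`) are hypothetical.

```python
import numpy as np

def estimate_tangent(X, x, d, k=20):
    """Approximate the tangent space at x by PCA on the k nearest samples."""
    idx = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
    nbrs = X[idx] - X[idx].mean(axis=0)
    # The leading d right singular vectors span the estimated tangent space.
    _, _, Vt = np.linalg.svd(nbrs, full_matrices=False)
    return Vt[:d].T                      # (ambient_dim, d) orthonormal basis

def approx_riemannian_gd(X, grad_f, x0, d, steps=100, lr=0.1):
    """Gradient descent with projected gradients and a nearest-sample retraction."""
    x = x0.copy()
    for _ in range(steps):
        T = estimate_tangent(X, x, d)
        g = T @ (T.T @ grad_f(x))        # project Euclidean gradient onto tangent
        y = x - lr * g                   # Euclidean step in the tangent direction
        x = X[np.argmin(np.linalg.norm(X - y, axis=1))]  # crude retraction
    return x
```

In the paper's framework, the nearest-sample snap above would be replaced by a smooth MLS-based projection onto the approximating manifold, which is what carries the provable guarantees.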
Models in which the covariance matrix has the structure of a sparse matrix plus a low rank perturbation are ubiquitous in machine learning applications. It is often desirable for learning algorithms to take advantage of such structures, avoiding costly matrix computations that often require cubic time and quadratic storage. This is often accomplished by performing operations that maintain such structures, e.g. matrix inversion via the Sherman-Morrison-Woodbury formula. In this paper we consider the matrix square root and inverse square root operations. Given a low rank perturbation to a matrix, we argue that a low-rank approximate correction to the (inverse) square root exists. We do so by establishing a geometric decay bound on the true correction's eigenvalues. We then proceed to frame the correction as the solution of an algebraic Riccati equation, and discuss how a low-rank solution to that equation can be computed. We analyze the approximation error incurred when approximately solving the algebraic Riccati equation, providing spectral and Frobenius norm forward and backward error bounds. Finally, we describe several applications of our algorithms, and demonstrate their utility in numerical experiments.
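As a concrete illustration of the structure-preserving operations mentioned above, here is a minimal sketch of the Sherman-Morrison-Woodbury identity applied to a sparse-plus-low-rank matrix $A + UV^T$: when solves with $A$ are cheap, the full inverse is never formed. This is not the paper's Riccati-based (inverse) square root algorithm, and the names `solve_A` and `woodbury_inverse_apply` are illustrative.

```python
import numpy as np

def woodbury_inverse_apply(solve_A, U, V, b):
    """Compute (A + U V^T)^{-1} b given a fast solver for A.
    Uses (A+UV^T)^{-1} = A^{-1} - A^{-1}U (I + V^T A^{-1} U)^{-1} V^T A^{-1}."""
    AinvU = solve_A(U)                       # n x r
    Ainvb = solve_A(b)                       # n
    r = U.shape[1]
    core = np.eye(r) + V.T @ AinvU           # small r x r capacitance matrix
    return Ainvb - AinvU @ np.linalg.solve(core, V.T @ Ainvb)

# Usage: for diagonal A, the solver is elementwise division.
n, r = 1000, 5
diag = np.random.rand(n) + 1.0
U, V = np.random.randn(n, r), np.random.randn(n, r)
b = np.random.randn(n)
x = woodbury_inverse_apply(lambda B: (B.T / diag).T, U, V, b)
```

The cost is dominated by a few solves with $A$ and an $r \times r$ factorization, avoiding the cubic cost of inverting the full matrix.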
Precision medicine is a clinical approach to disease prevention, detection, and treatment that takes into account each person's genetic background, environment, and lifestyle. The development of this tailored avenue has been driven by the availability of omics methods, the growth of large cohort sample sizes, and integration with clinical data. Despite tremendous progress, existing computational methods for data analysis fail to provide adequate solutions for this complex, high-dimensional, and longitudinal data. In this work, we develop a new method called TCAM, a dimensionality reduction technique for multi-way data that overcomes major limitations when performing trajectory analysis of longitudinal omics data. Using real-world data, we show that TCAM outperforms traditional methods, as well as state-of-the-art tensor-based approaches, for the analysis of longitudinal microbiome data. Moreover, we demonstrate the versatility of TCAM by applying it to several different omics datasets, and its applicability as a drop-in replacement within straightforward ML tasks.
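For illustration only, the sketch below shows the kind of multi-way (subjects x time x features) data TCAM targets, with a naive baseline that unfolds the tensor along the subject mode and applies a truncated SVD. TCAM itself is a tensor-factorization-based method and is not reproduced here; this merely demonstrates the data layout and the drop-in dimensionality-reduction interface.

```python
import numpy as np

def unfold_and_reduce(T, n_components):
    """T: (subjects, time, features) -> (subjects, n_components) scores."""
    subjects = T.shape[0]
    M = T.reshape(subjects, -1)                    # mode-1 unfolding
    M = M - M.mean(axis=0)                         # center across subjects
    U, S, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, :n_components] * S[:n_components]  # low-dimensional scores

scores = unfold_and_reduce(np.random.rand(40, 12, 300), n_components=2)
```

Unfolding discards the temporal structure of the trajectories, which is precisely the limitation a multi-way method is designed to overcome.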
The Neural Tangent Kernel (NTK) characterizes the behavior of infinitely wide neural networks trained under least-squares loss by gradient descent. Recent works have also reported that NTK regression can outperform finite-width neural networks trained on small-scale datasets. However, the computational complexity of kernel methods limits their use in large-scale learning tasks. To accelerate learning with the NTK, we design a near input-sparsity time approximation algorithm for the NTK by sketching the polynomial expansions of arc-cosine kernels: our sketch for the convolutional counterpart of the NTK (CNTK) can transform any image using a runtime linear in the number of pixels. Furthermore, by combining random features (based on leverage score sampling) with the sketching algorithm, we prove a spectral approximation guarantee for the NTK matrix. We benchmark our methods on various large-scale regression and classification tasks, and show that a linear regressor trained on our CNTK features matches the accuracy of the exact CNTK on the CIFAR-10 dataset, while achieving a 150x speedup.
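For reference, here is a short sketch of the exact (non-accelerated, quadratic-cost) NTK of a one-hidden-layer ReLU network, written in terms of the arc-cosine kernels whose polynomial expansions the paper sketches. The normalization convention is assumed and may differ from the paper's by a constant factor.

```python
import numpy as np

def relu_ntk(X, Z):
    """Exact NTK of a one-hidden-layer ReLU network between rows of X and Z."""
    nx = np.linalg.norm(X, axis=1, keepdims=True)
    nz = np.linalg.norm(Z, axis=1, keepdims=True)
    u = np.clip((X @ Z.T) / (nx * nz.T), -1.0, 1.0)    # cosine of input angles
    k0 = (np.pi - np.arccos(u)) / np.pi                # arc-cosine kernel, degree 0
    k1 = (np.sqrt(1 - u**2) + u * (np.pi - np.arccos(u))) / np.pi  # degree 1
    return (nx * nz.T) * (u * k0 + k1)
```

Forming this kernel matrix costs time and memory quadratic in the number of examples, which is exactly the bottleneck the sketching algorithm is designed to remove.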
Directed information (DI) is a fundamental measure for the study and analysis of sequential stochastic models. In particular, when optimized over input distributions it characterizes the capacity of general communication channels. However, analytic computation of DI is typically intractable and existing optimization techniques over discrete input alphabets require knowledge of the channel model, which renders them inapplicable when only samples are available. To overcome these limitations, we propose a novel estimation-optimization framework for DI over discrete input spaces. We formulate DI optimization as a Markov decision process and leverage reinforcement learning techniques to optimize a deep generative model of the input process probability mass function (PMF). Combining this optimizer with the recently developed DI neural estimator, we obtain an end-to-end estimation-optimization algorithm which is applied to estimating the (feedforward and feedback) capacity of various discrete channels with memory. Furthermore, we demonstrate how to use the optimized PMF model to (i) obtain theoretical bounds on the feedback capacity of unifilar finite-state channels; and (ii) perform probabilistic shaping of constellations in the peak power-constrained additive white Gaussian noise channel.
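A minimal sketch of the estimation-optimization loop, under our own simplifications: a softmax-parameterized PMF over a discrete input alphabet is updated with a REINFORCE-style gradient, with `di_reward` as a toy stand-in for the directed-information neural estimator (which this sketch does not implement).

```python
import numpy as np

rng = np.random.default_rng(0)
ALPHABET, LR, BATCH = 4, 0.05, 256
logits = np.zeros(ALPHABET)          # parameters of the input PMF

def di_reward(symbols):
    """Placeholder per-sample reward; in the paper this is a neural DI estimate."""
    return -(symbols == 0).astype(float)   # toy reward discouraging symbol 0

for step in range(200):
    pmf = np.exp(logits - logits.max())
    pmf /= pmf.sum()
    xs = rng.choice(ALPHABET, size=BATCH, p=pmf)
    rewards = di_reward(xs)
    baseline = rewards.mean()             # variance-reduction baseline
    grad = np.zeros(ALPHABET)
    for x, r in zip(xs, rewards):
        g = -pmf.copy()
        g[x] += 1.0                       # d/dlogits of log pmf[x] = onehot(x) - pmf
        grad += (r - baseline) * g
    logits += LR * grad / BATCH           # ascend the estimated reward
```

In the paper's setting the PMF is a deep generative model over input sequences and the reward comes from the DI neural estimator, but the policy-gradient structure of the update is the same.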
We construct a universally Bayes consistent learning rule that satisfies differential privacy (DP). We first handle the setting of binary classification and then extend our rule to the more general setting of density estimation (with respect to the total variation metric). The existence of a universally consistent DP learner reveals a stark difference with the distribution-free PAC model. Indeed, in the latter DP learning is extremely limited: even one-dimensional linear classifiers are not privately learnable in this stringent model. Our result thus demonstrates that by allowing the learning rate to depend on the target distribution, one can circumvent the above-mentioned impossibility result and in fact, learn \emph{arbitrary} distributions by a single DP algorithm. As an application, we prove that any VC class can be privately learned in a semi-supervised setting with a near-optimal \emph{labeled} sample complexity of $\tilde{O}(d/\varepsilon)$ labeled examples (and with an unlabeled sample complexity that can depend on the target distribution).
Recently proposed Gated Linear Networks present a tractable nonlinear network architecture, and exhibit interesting capabilities such as learning with local error signals and reduced forgetting in sequential learning. In this work, we introduce a novel gating architecture, named Globally Gated Deep Linear Networks (GGDLNs), where gating units are shared among all processing units in each layer, thereby decoupling the architecture of the nonlinear but unlearned gatings from that of the learned linear processing motifs. We derive exact equations for the generalization properties of these networks in the finite-width thermodynamic limit, defined by $P,N\rightarrow\infty$, $P/N\sim O(1)$, where $P$ and $N$ are the training sample size and the network width respectively. We find that the statistics of the network predictor can be expressed in terms of kernels that undergo shape renormalization through a data-dependent matrix, compared to the GP kernels. Our theory accurately captures the behavior of finite-width GGDLNs trained with gradient descent dynamics. We show that kernel shape renormalization gives rise to rich generalization properties w.r.t. network width, depth, and L2 regularization amplitude. Interestingly, networks with sufficiently many gating units behave similarly to standard ReLU networks. Although the gatings in the model do not participate in supervised learning, we show the utility of unsupervised learning of the gating parameters. Additionally, our theory allows the evaluation of the network's ability to learn multiple tasks by incorporating task-relevant information into the gating units. In summary, our work provides the first exact theoretical solution of learning in a family of nonlinear networks with finite width. The rich and diverse behavior of GGDLNs suggests that they are useful, analytically tractable models for studying learning of single and multiple tasks in finite-width nonlinear deep networks.
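One plausible minimal reading of the architecture (our assumption, not necessarily the paper's exact parameterization) is sketched below: fixed random gating units computed from the input are shared across all processing units of a layer and multiply the output of learned linear maps, so the only nonlinearity comes from the unlearned gates.

```python
import numpy as np

def ggdln_forward(x, Ws, Vs):
    """x: (batch, d). Ws: learned linear weights per layer.
    Vs: fixed random projections defining the shared, unlearned gates."""
    h = x
    for W, V in zip(Ws, Vs):
        gates = (x @ V > 0).astype(h.dtype)   # gates depend only on the input
        h = gates * (h @ W)                   # elementwise gating of linear output
    return h

# Usage sketch: three layers, gates fixed at initialization and never trained.
rng = np.random.default_rng(0)
d, widths, prev = 10, [32, 32, 1], 10
Ws, Vs = [], []
for n in widths:
    Ws.append(rng.normal(size=(prev, n)) / np.sqrt(prev))   # learned
    Vs.append(rng.normal(size=(d, n)))                      # fixed gating
    prev = n
out = ggdln_forward(rng.normal(size=(8, d)), Ws, Vs)        # shape (8, 1)
```

Because the gates are fixed, the learned map is linear in the weights for any given input, which is what makes the generalization theory exactly solvable.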
Imaging of facial affect can be used to measure psychophysiological attributes in children through adulthood, in particular for monitoring lifelong conditions such as autism spectrum disorder. Deep convolutional neural networks have shown promising results in classifying facial expressions of adults. However, classifier models trained on adult benchmark data are unsuitable for learning children's expressions, due to differences in psychophysical development. Similarly, models trained on children's data perform poorly on adult expression classification. We propose domain adaptation to simultaneously align the distributions of adult and child expressions in a shared latent space, to ensure robust classification of either domain. Furthermore, age variations in facial images have been studied in the context of adult-child expression classification, but remain an open challenge. Drawing inspiration from multiple fields, we propose deep adaptive facial expressions fused with BetaMix selected landmark features for adult-child facial expression classification. For the first time in the literature, a mixture of Beta distributions is used to decompose and select facial features based on their correlation with the expression, domain, and identity factors. We evaluate our approach on two pairs of adult-child datasets. The proposed method outperforms adult-child transfer learning and other baseline adaptation methods in aligning the latent representations of adult and child expressions.
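To illustrate the flavor of Beta-mixture-based feature selection (an assumption-laden sketch, not the paper's BetaMix procedure), per-feature relevance scores in $(0,1)$ can be modeled with a two-component Beta mixture fit by EM, keeping the features assigned to the higher-mean component.

```python
import numpy as np
from scipy.stats import beta

def betamix_select(scores, iters=50):
    """EM for a two-component Beta mixture over relevance scores in (0, 1);
    returns a boolean mask keeping features in the higher-mean component."""
    s = np.clip(scores, 1e-3, 1 - 1e-3)
    params = np.array([[2.0, 5.0], [5.0, 2.0]])    # (a, b) for each component
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each feature.
        pdf = np.stack([w[k] * beta.pdf(s, *params[k]) for k in range(2)])
        resp = pdf / pdf.sum(axis=0, keepdims=True)
        # M-step: method-of-moments refit of each Beta component.
        for k in range(2):
            m = np.average(s, weights=resp[k])
            v = np.average((s - m) ** 2, weights=resp[k]) + 1e-8
            c = max(m * (1 - m) / v - 1, 1e-2)
            params[k] = [m * c, (1 - m) * c]
            w[k] = resp[k].mean()
    means = params[:, 0] / params.sum(axis=1)      # Beta mean is a / (a + b)
    return resp[np.argmax(means)] > 0.5

mask = betamix_select(np.random.rand(200))
```

The Beta family is a natural fit here because correlation-derived relevance scores live on a bounded interval and can be skewed toward either end.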
Understanding to what extent neural networks memorize training data is an intriguing question with practical and theoretical implications. In this paper, we show that in some cases a significant fraction of the training data can in fact be reconstructed from the parameters of a trained neural network classifier. We propose a novel reconstruction scheme that stems from recent theoretical results about the implicit bias in training neural networks with gradient-based methods. To the best of our knowledge, ours are the first results showing that reconstructing a large portion of the actual training samples from a trained neural network classifier is feasible. This has negative implications for privacy, as the scheme can be used as an attack for revealing sensitive training data. We demonstrate our method for binary MLP classifiers on a few standard computer vision datasets.
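A rough sketch of the kind of reconstruction objective suggested by the implicit-bias results the scheme builds on: gradient flow on homogeneous networks converges to a KKT point of the max-margin problem, where $\theta = \sum_i \lambda_i y_i \nabla_\theta f(\theta; x_i)$, so candidate samples and multipliers can be optimized to satisfy this stationarity condition. All names and hyperparameters below are illustrative; this is not the authors' code.

```python
import torch

def reconstruction_loss(model, xs, lams, ys):
    """|| theta - sum_i lam_i y_i grad_theta f(theta; x_i) ||^2 over parameters."""
    params = list(model.parameters())
    outs = model(xs).squeeze(-1)                       # f(theta; x_i)
    weighted = (lams.clamp(min=0) * ys * outs).sum()   # sum_i lam_i y_i f(x_i)
    grads = torch.autograd.grad(weighted, params, create_graph=True)
    return sum(((p - g) ** 2).sum() for p, g in zip(params, grads))

# Usage: optimize random candidate inputs and multipliers toward stationarity.
model = torch.nn.Sequential(torch.nn.Linear(784, 100), torch.nn.ReLU(),
                            torch.nn.Linear(100, 1))   # stands in for a trained net
xs = torch.randn(50, 784, requires_grad=True)          # candidate training samples
lams = torch.rand(50, requires_grad=True)              # candidate KKT multipliers
ys = torch.tensor([1.0, -1.0]).repeat(25)              # assumed binary labels
opt = torch.optim.Adam([xs, lams], lr=0.01)
for _ in range(10):
    opt.zero_grad()
    reconstruction_loss(model, xs, lams, ys).backward()
    opt.step()
```

In an actual attack the model weights come from a genuinely trained classifier, and minimizing this residual drives some of the candidates toward real training samples.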
Ultrasound is the second most used modality in medical imaging. It is cost effective, hazardless, portable, and implemented routinely in numerous clinical procedures. Nonetheless, image quality is characterized by a granulated appearance, poor SNR, and speckle noise. For malignant lesions, the margins appear blurred. Thus, there is a great need for improving ultrasound image quality. We hypothesize that this can be achieved with neural networks, by translation into a more realistic display which mimics an anatomical cut through the tissue. The preferable approach for achieving this goal would be to use a set of paired images; however, this is practically impossible in our case. Therefore, a Cycle Generative Adversarial Network (CycleGAN) was used, to learn the properties of each domain separately and enforce cross-domain cycle consistency. The two datasets used for training the model were "Breast Ultrasound Images" (BUSI) and a set of optical images of poultry breast tissue samples acquired in our lab. The generated pseudo-anatomical images provide improved visual discrimination of lesions, with clearer border definition and pronounced contrast. To evaluate the preservation of anatomical features, the lesions in both the ultrasound images and the generated pseudo-anatomical images were automatically segmented and compared. This comparison yielded a median Dice score of 0.91 for benign tumors and 0.70 for malignant ones. The median lesion center error was 0.58% and 3.27% for benign and malignant tumors respectively, and the median area error index was 0.40% and 4.34% for benign and malignant tumors respectively. In conclusion, the generated pseudo-anatomical images, presented in a more intuitive way, enhance tissue anatomy and may potentially simplify diagnosis and improve clinical outcomes.
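For concreteness, here is a minimal sketch of the cycle-consistency term at the heart of the CycleGAN setup described above; the adversarial losses, generator/discriminator architectures, and training loop are omitted, and all names are illustrative.

```python
import torch.nn.functional as F_nn

def cycle_consistency_loss(G, F, x_us, y_anat, lam=10.0):
    """L1 cycle loss lam * (|| F(G(x)) - x || + || G(F(y)) - y ||), where
    G maps ultrasound -> pseudo-anatomical and F maps the reverse direction."""
    forward_cycle = F_nn.l1_loss(F(G(x_us)), x_us)      # ultrasound round trip
    backward_cycle = F_nn.l1_loss(G(F(y_anat)), y_anat)  # anatomical round trip
    return lam * (forward_cycle + backward_cycle)
```

This term is what allows training with unpaired ultrasound and optical tissue images: each generator must preserve enough content for the other to invert it, so no pixel-aligned pairs are needed.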